Machine Learning and Non-Negative Compressive Sampling

Authors

  • Paul D. O’Grady
  • Scott T. Rickard
Abstract

The emerging theory of compressive sampling demonstrates that, by exploiting the structure of a signal, it is possible to sample a signal below the Nyquist rate using random projections and still achieve perfect reconstruction. In this paper, we consider a special case of compressive sampling where the uncompressed signal is non-negative, and propose a number of sparse recovery algorithms, which utilise Non-negative Matrix Factorisation (NMF), Iteratively Reweighted Least Squares (IRLS) and Non-negative Quadratic Programming (NQP), for the recovery of minimum ℓp-norm solutions, 0 ≤ p ≤ 1. We examine the performance of NMF when applied to compressed data and discuss the consequences for machine learning algorithms applied to random projections of the data to be analysed. As a necessary first step, we investigate the signal recovery performance of the proposed algorithms and demonstrate that, for sufficiently sparse non-negative signals, the signals recovered by the sparse recovery algorithms and their least squares counterparts are essentially the same, which suggests that a non-negativity constraint alone is enough to recover sufficiently sparse signals. We build on these results and extend Non-negative Matrix Factorisation to compressively sampled data, where a sparse non-negative basis and corresponding non-negative coefficients for the original uncompressed matrix are learned indirectly in the compressed domain. This demonstrates that machine learning algorithms that employ non-negativity constraints can successfully learn uncompressed features from compressed data by way of a simple modification of the generative model.
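The abstract's central claim, that a non-negativity constraint alone can recover a sufficiently sparse non-negative signal from random projections, can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensions, the non-negative random projection, and the standard multiplicative update for non-negative least squares are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                  # signal length, measurements (m < n), sparsity

# A sparse non-negative signal to be recovered.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

Phi = rng.uniform(0.0, 1.0, (m, n))   # non-negative random projection matrix
y = Phi @ x                           # compressed measurements

# Multiplicative update for min_h ||y - Phi h||^2 subject to h >= 0:
# h stays non-negative because every factor in the update is non-negative.
h = np.full(n, 0.1)
for _ in range(5000):
    h *= (Phi.T @ y) / (Phi.T @ (Phi @ h) + 1e-12)

rel_residual = np.linalg.norm(y - Phi @ h) / np.linalg.norm(y)
print(rel_residual < 1e-2)            # the fit is (near-)exact despite m < n
```

If the paper's observation holds for this sparsity level, the non-negative least squares solution `h` should also coincide with the original sparse `x`, even though no explicit sparsity penalty was used.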

Similar articles

Compressive hyperspectral imaging via adaptive sampling and dictionary learning

In this paper, we propose a new sampling strategy for hyperspectral signals that is based on dictionary learning and singular value decomposition (SVD). Specifically, we first learn a sparsifying dictionary from training spectral data using dictionary learning. We then perform an SVD on the dictionary and use the first few left singular vectors as the rows of the measurement matrix to obtain th...
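The measurement-matrix construction described in this snippet (SVD of a sparsifying dictionary, with the first few left singular vectors used as the rows of the measurement matrix) can be sketched as follows. A random matrix stands in for the learned dictionary here, so the dimensions and the dictionary itself are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a learned sparsifying dictionary (atoms as columns); in the
# paper it would be learned from training spectra via dictionary learning.
D = rng.standard_normal((100, 256))        # 100 spectral bands, 256 atoms

m = 20                                     # number of compressive measurements
U, s, Vt = np.linalg.svd(D, full_matrices=False)
Phi = U[:, :m].T                           # first m left singular vectors as rows

x = D[:, 0]                                # a signal representable in the dictionary
y = Phi @ x                                # compressed measurement of the spectrum

print(Phi.shape)                           # (20, 100)
print(np.allclose(Phi @ Phi.T, np.eye(m))) # True: rows are orthonormal
```

Because the rows of `Phi` span the dominant left singular subspace of `D`, the measurements concentrate on the directions in which dictionary-sparse signals carry most of their energy.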


AN ABSTRACT OF THE THESIS OF Adem Zaid for the degree of Master of Science in Computer Science presented on March 15, 2017. Title: Leveraging Compressive Sampling and Machine Learning for Adaptive and Cooperative Wideband Spectrum Sensing. Abstract approved: Bechir Hamdaoui

This thesis proposes a novel technique that exploits spectrum occupancy behaviors inherent to wideband spectrum access to enable efficient cooperative spectrum sensing. The proposed technique reduces the number of required sensing measurements while accurately recovering spectrum occupancy information. It does so by leveraging compressive sampling theory to exploit the...


Active Learning and Adaptive Sampling for Non-Parametric Inference

Active Learning and Adaptive Sampling for Non-Parametric Inference by Rui M. Castro. This thesis presents a general discussion of active learning and adaptive sampling. In many practical scenarios it is possible to use information gleaned from previous observations to focus the sampling process, in the spirit of the "twenty questions" game. As more samples are collected one can learn how to impr...


A Novel Face Detection Method Based on Over-complete Incoherent Dictionary Learning

In this paper, the face detection problem is considered using the concepts of the compressive sensing technique. This technique includes a dictionary learning procedure and a sparse coding method to represent the structural content of input images. In the proposed method, dictionaries are learned in such a way that the trained models have the least degree of coherence to each other. The novelty of the prop...


Instance Sampling Methods for Pronoun Resolution

Instance sampling is a method to balance extremely skewed training sets as they occur, for example, in machine learning settings for anaphora resolution. Here, the number of negative samples (i.e., non-anaphoric pairs) is usually substantially larger than the number of positive samples. This causes classifiers to be biased towards negative classification, leading to suboptimal...



Publication date: 2009